Frank Dellaert
MARVO: Marine-Adaptive Radiance-aware Visual Odometry
Sundar, Sacchin, Kikani, Atman, Alam, Aaliya, Shrote, Sumukh, Khan, A. Nayeemulla, Shahina, A.
Underwater visual localization remains challenging due to wavelength-dependent attenuation, poor texture, and non-Gaussian sensor noise. We introduce MARVO, a physics-aware, learning-integrated odometry framework that fuses underwater image-formation modeling, differentiable matching, and reinforcement-learning optimization. At the front-end, we extend a transformer-based feature matcher with a Physics-Aware Radiance Adapter that compensates for color-channel attenuation and contrast loss, yielding geometrically consistent feature correspondences under turbidity. These semi-dense matches are combined with inertial and pressure measurements inside a factor-graph backend, where we formulate a keyframe-based visual-inertial-barometric estimator using the GTSAM library. Each keyframe introduces (i) pre-integrated IMU motion factors, (ii) MARVO-derived visual pose factors, and (iii) barometric depth priors, giving a full-state MAP estimate in real time. Lastly, we introduce a Reinforcement-Learning-based Pose-Graph Optimizer that refines global trajectories beyond the local minima of classical least-squares solvers by learning optimal retraction actions on SE(2).
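The back-end described in the abstract can be illustrated with a toy linear-Gaussian factor graph. This is only a sketch in plain NumPy, not the authors' GTSAM implementation; the keyframe depths, measurement values, and noise sigmas below are all made up for illustration:

```python
import numpy as np

# Toy 1-D "depth" factor graph in the spirit of a visual-inertial-barometric
# back-end: unknown keyframe depths z0, z1, z2 constrained by
#   (i)   a prior on the first keyframe       z0 = 0.0          (sigma 0.10)
#   (ii)  relative-motion (IMU-like) factors  z_{k+1} - z_k = 1 (sigma 0.20)
#   (iii) a barometric depth measurement      z2 = 2.1          (sigma 0.05)
# The MAP estimate of a linear-Gaussian graph is a weighted least-squares solve.
factors = [
    # (coefficient row a, measurement m, sigma): residual is a @ z - m
    (np.array([1.0, 0.0, 0.0]), 0.0, 0.10),   # prior on z0
    (np.array([-1.0, 1.0, 0.0]), 1.0, 0.20),  # motion factor z1 - z0
    (np.array([0.0, -1.0, 1.0]), 1.0, 0.20),  # motion factor z2 - z1
    (np.array([0.0, 0.0, 1.0]), 2.1, 0.05),   # barometric depth on z2
]

# Assemble the whitened Jacobian and right-hand side, then solve the
# normal equations J^T J z = J^T r for the MAP depths.
J = np.vstack([a / s for a, m, s in factors])
r = np.array([m / s for a, m, s in factors])
z_map = np.linalg.solve(J.T @ J, J.T @ r)
print(z_map)  # the tight barometric factor pulls z2 close to 2.1
```

The same pattern scales to the full state (poses, velocities, biases); the barometric prior simply becomes one more low-dimensional factor in the graph.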
How-to Augmented Lagrangian on Factor Graphs
Bazzana, Barbara, Andreasson, Henrik, Grisetti, Giorgio
Factor graphs are a very powerful graphical representation, used to model many problems in robotics. They are widespread in the areas of Simultaneous Localization and Mapping (SLAM), computer vision, and localization. In this paper we describe an approach to bridge the gap to other areas, such as optimal control, by presenting an extension of factor graph solvers to constrained optimization. The core idea of our method is to encapsulate the Augmented Lagrangian (AL) method in factors of the graph that can be integrated straightforwardly into existing factor graph solvers. We show the generality of our approach by addressing three applications arising from different areas: pose estimation, rotation synchronization, and Model Predictive Control (MPC) of a pseudo-omnidirectional platform. We implemented our approach in C++ and ROS. Beyond its generality, application results show that our approach compares favorably against domain-specific approaches.
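The core AL idea can be sketched on a toy equality-constrained problem. This is a generic augmented-Lagrangian loop, not the paper's factor implementation; the objective, constraint, and penalty value are illustrative assumptions:

```python
import numpy as np

# Minimal augmented-Lagrangian loop on a toy equality-constrained problem:
#   minimize  (x - 2)^2 + (y - 1)^2   subject to  x + y = 1.
# The constraint enters the objective like an extra quadratic "factor"
# (rho/2) * c(z)^2 plus a linear multiplier term lam * c(z), and the
# multiplier lam is updated after each inner solve.
rho, lam = 10.0, 0.0
for _ in range(30):
    # The inner problem is quadratic, so the minimizer of
    #   f(x, y) + lam * c + (rho/2) * c^2,   c = x + y - 1,
    # is the solution of its linear stationarity conditions:
    A = np.array([[2.0 + rho, rho],
                  [rho, 2.0 + rho]])
    b = np.array([4.0 - lam + rho, 2.0 - lam + rho])
    x, y = np.linalg.solve(A, b)
    lam += rho * (x + y - 1.0)  # dual (multiplier) update

print(x, y, lam)  # converges to the constrained optimum (1, 0) with lam = 2
```

Because each inner step is just another least-squares solve, the AL terms fit naturally into the iterative machinery an existing factor graph solver already provides.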
A1 SLAM: Quadruped SLAM using the A1's Onboard Sensors
Quadrupeds are highly versatile robots that can traverse difficult terrain that wheeled mobile robots cannot. This flexibility makes quadrupeds appealing for applications such as inspection, surveying construction sites, and search-and-rescue. However, to perform these tasks autonomously, quadrupeds, like other mobile robots, require a form of perception that enables them to localize in an environment without a priori knowledge. For a robot to know its location in the environment, it must localize against a predefined map, yet it can only create a map based on its known location. To solve this chicken-and-egg problem, simultaneous localization and mapping, or SLAM, is the standard approach for mobile robots: it optimizes the robot's location and the map simultaneously. The estimated poses and map from SLAM algorithms can then be used for downstream tasks such as terrain-dependent control or navigation planning. Despite recent developments in both quadruped robotics and SLAM research, there has yet to be an open-source package specifically designed for high-performing SLAM on quadrupeds.
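The chicken-and-egg argument above is resolved by estimating poses and map jointly. A minimal 1-D illustration (made-up numbers, plain NumPy rather than the package itself):

```python
import numpy as np

# Tiny 1-D SLAM: two robot poses x0, x1 and one landmark l are estimated
# *jointly*, which is how SLAM escapes the localization/mapping circularity.
# Unknown vector: [x0, x1, l]. Each measurement is a linear factor
# a @ state = m with noise sigma. (All numbers are illustrative.)
factors = [
    (np.array([1.0, 0.0, 0.0]), 0.0, 0.01),   # prior: x0 = 0
    (np.array([-1.0, 1.0, 0.0]), 1.0, 0.10),  # odometry: x1 - x0 = 1.0
    (np.array([-1.0, 0.0, 1.0]), 2.0, 0.10),  # range from x0: l - x0 = 2.0
    (np.array([0.0, -1.0, 1.0]), 1.1, 0.10),  # range from x1: l - x1 = 1.1
]

# Whitened least squares over poses AND map at once.
J = np.vstack([a / s for a, m, s in factors])
r = np.array([m / s for a, m, s in factors])
x0, x1, l = np.linalg.lstsq(J, r, rcond=None)[0]
# The two slightly inconsistent range readings are reconciled jointly:
print(x0, x1, l)
```

Neither range measurement alone fixes the landmark; the joint solve splits the 0.1 m disagreement between the pose estimate and the landmark estimate.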
Factor Graph Accelerator for LiDAR-Inertial Odometry
Hao, Yuhui, Yu, Bo, Liu, Qiang, Liu, Shaoshan, Zhu, Yuhao
A factor graph is a graph representing the factorization of a probability distribution function, and it has been utilized in many autonomous machine computing tasks, such as localization, tracking, planning, and control. We are developing an architecture with the goal of using the factor graph as a common abstraction for most, if not all, autonomous machine computing tasks. If successful, the architecture would provide a very simple interface for mapping autonomous machine functions to the underlying compute hardware. As a first step, this paper presents our most recent work on developing a factor graph accelerator for LiDAR-Inertial Odometry (LIO), an essential task in many autonomous machines, such as autonomous vehicles and mobile robots. By modeling LIO as a factor graph, the proposed accelerator not only supports multi-sensor fusion of LiDAR, inertial measurement units (IMUs), GPS, etc., but also solves the global optimization problem of robot navigation in batch or incremental modes. Our evaluation demonstrates that the proposed design significantly improves the real-time performance and energy efficiency of autonomous machine navigation systems. This initial success suggests the potential of generalizing the factor graph architecture into a common abstraction for autonomous machine computing, including tracking, planning, and control.
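The batch-versus-incremental distinction mentioned above can be sketched in information form on a two-state toy graph. The numbers and the scalar state are illustrative assumptions; a real LIO accelerator operates on much larger nonlinear graphs:

```python
import numpy as np

# Linear factors accumulate into an information matrix Lambda and vector eta;
# "incremental" mode folds a new factor into (Lambda, eta) and re-solves,
# instead of rebuilding the whole system from scratch.
def add_factor(Lambda, eta, a, m, sigma):
    """Fold one linear factor a @ x = m (noise sigma) into the info form."""
    w = 1.0 / sigma**2
    Lambda += w * np.outer(a, a)   # in-place update of the caller's arrays
    eta += w * m * a

# Batch: start with prior + odometry over states [x0, x1], solve once.
Lambda, eta = np.zeros((2, 2)), np.zeros(2)
add_factor(Lambda, eta, np.array([1.0, 0.0]), 0.0, 0.1)   # prior: x0 = 0
add_factor(Lambda, eta, np.array([-1.0, 1.0]), 1.0, 0.2)  # odom: x1 - x0 = 1
x_batch = np.linalg.solve(Lambda, eta)

# Incremental: a GPS-like factor arrives; fold it in and re-solve.
add_factor(Lambda, eta, np.array([0.0, 1.0]), 1.2, 0.1)   # GPS: x1 = 1.2
x_incremental = np.linalg.solve(Lambda, eta)
print(x_batch, x_incremental)
```

The rank-one updates in `add_factor` are exactly the kind of regular, local arithmetic that lends itself to hardware acceleration.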
Handling Constrained Optimization in Factor Graphs for Autonomous Navigation
Bazzana, Barbara, Guadagnino, Tiziano, Grisetti, Giorgio
Factor graphs are graphical models used to represent a wide variety of problems across robotics, such as Structure from Motion (SfM), Simultaneous Localization and Mapping (SLAM), and calibration. Typically, at their core, they have an optimization problem whose terms depend only on a small subset of variables. Factor graph solvers exploit this locality to drastically reduce the computational time of the Iterative Least-Squares (ILS) methodology. Although extremely powerful, their application is usually limited to unconstrained problems. In this paper, we model constraints over variables within factor graphs by introducing a factor graph version of the method of Lagrange Multipliers. We show the potential of our method by presenting a full navigation stack based on factor graphs. Unlike standard navigation stacks, we can model both optimal control for local planning and localization with factor graphs, and solve the two problems using the standard ILS methodology. We validate our approach in real-world autonomous navigation scenarios, comparing it with the de facto standard navigation stack implemented in ROS. Comparative experiments show that, for the application at hand, our system outperforms the standard nonlinear programming solver Interior-Point Optimizer (IPOPT) in runtime, while achieving similar solutions.
A New Trick Lets Artificial Intelligence See in 3D
The current wave of artificial intelligence can be traced back to 2012, and an academic contest that measured how well algorithms could recognize objects in photographs. That year, researchers found that feeding thousands of images into an algorithm inspired loosely by the way neurons in a brain respond to input produced a huge leap in accuracy. The breakthrough sparked an explosion in academic research and commercial activity that is transforming some companies and industries. Now a new trick, which involves training the same kind of AI algorithm to turn 2D images into a rich 3D view of a scene, is sparking excitement in the worlds of both computer graphics and AI. The technique has the potential to shake up video games, virtual reality, robotics, and autonomous driving.
#IROS2020 Plenary and Keynote talks focus series #2: Frank Dellaert & Ashish Deshpande
Last Wednesday we started this series of posts showcasing the plenary and keynote talks from the IEEE/RSJ IROS2020 (International Conference on Intelligent Robots and Systems). This is a great opportunity to stay up to date with the latest robotics & AI research from top roboticists in the world. Bio: Frank Dellaert is a Professor in the School of Interactive Computing at the Georgia Institute of Technology and a Research Scientist at Google AI. While on leave from Georgia Tech in 2016-2018, he served as Technical Project Lead at Facebook's Building 8 hardware division. Before that he was also Chief Scientist at Skydio, a startup founded by MIT grads to create intuitive interfaces for micro-aerial vehicles.